An introduction to Predictive Modeling in Python


In [ ]:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

import warnings
warnings.simplefilter('ignore', DeprecationWarning)

Loading tabular data from the Titanic Kaggle challenge into a pandas DataFrame

Let us have a look at the Titanic dataset from the Kaggle Getting Started challenge at:

https://www.kaggle.com/c/titanic-gettingStarted

We can load the CSV file as a pandas data frame in one line:


In [ ]:
#data = pd.read_csv('../datasets/titanic_train.csv')
data = pd.read_csv('https://dl.dropboxusercontent.com/u/5743203/data/titanic/titanic_train.csv')

pandas data frames have an HTML table representation in the IPython notebook. Let's have a look at the first 5 rows:


In [ ]:
data.head(5)

In [ ]:
data.count()

The data frame has 891 rows. Some passengers have missing information though: in particular Age and Cabin info can be missing. The meaning of the columns is explained on the challenge website:

https://www.kaggle.com/c/titanic-gettingStarted/data

A data frame can be converted into a numpy array by calling the values attribute:


In [ ]:
data.values

However this cannot be directly fed to a scikit-learn model:

  • the target variable (survival) is mixed with the input data

  • some attributes, such as unique IDs, have no predictive value for the task

  • the values are heterogeneous (string labels for categories, integers and floating point numbers)

  • some attribute values are missing (nan: "not a number")
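
To make these issues concrete, here is a small inspection sketch (using only standard pandas calls; not part of the original analysis) that exposes the heterogeneous column types and the missing values:


In [ ]:
# Mix of integer, floating point and string (object) columns:
print(data.dtypes)

# Number of missing values per column (Age, Cabin and Embarked have some):
print(data.isnull().sum())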

Predicting survival

The goal of the challenge is to predict whether a passenger survived based on the other known attributes. Let us have a look at the Survived column:


In [ ]:
data.Survived.dtype

data.Survived is an instance of the pandas Series class with an integer dtype:


In [ ]:
data.Survived.__class__.__module__, data.Survived.__class__.__name__

The data object is an instance of the pandas DataFrame class:


In [ ]:
data.__class__.__module__, data.__class__.__name__

Series can be seen as homogeneous, 1D columns. DataFrame instances are heterogeneous collections of columns with the same length.
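
For instance (a small illustrative sketch), selecting a single column of the data frame returns a Series while selecting a list of columns returns a DataFrame:


In [ ]:
# A single column is a homogeneous 1D Series;
# a list of columns is a heterogeneous DataFrame:
print(type(data['Survived']))
print(type(data[['Survived', 'Age', 'Name']]))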

The original data frame can be aggregated by counting rows for each possible value of the Survived column:


In [ ]:
data.groupby('Survived').count()['Survived']

In [ ]:
np.mean(data.Survived == 0)

From this subset of the full passenger list, about 2/3 perished in the event. So if we are to build a predictive model from this data, a baseline model to compare against would be one that always predicts death. Such a constant model would reach around 62% predictive accuracy (which is higher than predicting at random).

pandas Series instances can be converted to regular 1D numpy arrays by using the values attribute:


In [ ]:
target = data.Survived.values

In [ ]:
type(target)

In [ ]:
target.dtype

In [ ]:
target[:5]
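
As a sanity check, the constant "always predict death" baseline mentioned above can also be expressed as a scikit-learn estimator. Here is a minimal sketch using DummyClassifier from sklearn.dummy (not used elsewhere in this notebook):


In [ ]:
# Sketch: the constant baseline as an estimator. DummyClassifier ignores the
# input features and always predicts the most frequent class (death).
from sklearn.dummy import DummyClassifier

X_dummy = np.zeros((len(target), 1))  # placeholder features, ignored
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_dummy, target)
baseline.score(X_dummy, target)  # around 0.62, the fraction of non-survivors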

Training a predictive model on numerical features

sklearn estimators all work with homogeneous numerical feature descriptors passed as a numpy array. Therefore passing the raw data frame will not work out of the box.

Let us start simple and build a first model that only uses readily available numerical features as input, namely data.Fare, data.Pclass and data.Age.


In [ ]:
numerical_features = data.get(['Fare', 'Pclass', 'Age'])
numerical_features.head(5)

Unfortunately some passengers do not have age information:


In [ ]:
numerical_features.count()

Let's use the pandas fillna method to impute the median age for those passengers:


In [ ]:
median_features = numerical_features.dropna().median()
median_features

In [ ]:
imputed_features = numerical_features.fillna(median_features)
imputed_features.count()

In [ ]:
imputed_features.head(5)

Now that the data frame is clean, we can convert it into a homogeneous numpy array of floating point values:


In [ ]:
features_array = imputed_features.values
features_array

Let's use 80% of the data for training a first model and keep 20% for computing its generalization score:


In [ ]:
from sklearn.cross_validation import train_test_split

features_train, features_test, target_train, target_test = train_test_split(
    features_array, target, test_size=0.20, random_state=0)

In [ ]:
features_train.shape

In [ ]:
features_test.shape

In [ ]:
target_train.shape

In [ ]:
target_test.shape

Let's start with a simple model from sklearn, namely LogisticRegression:


In [ ]:
from sklearn.linear_model import LogisticRegression

logreg = LogisticRegression(C=1)
logreg.fit(features_train, target_train)

In [ ]:
target_predicted = logreg.predict(features_test)

In [ ]:
from sklearn.metrics import accuracy_score

accuracy_score(target_test, target_predicted)

This first model has around 73% accuracy: this is better than our baseline that always predicts death.

Model evaluation and interpretation

Interpreting linear model weights

The coef_ attribute of a fitted linear model such as LogisticRegression holds the weights of each feature:


In [ ]:
feature_names = numerical_features.columns.values
feature_names

In [ ]:
logreg.coef_

In [ ]:
x = np.arange(len(feature_names))
plt.bar(x, logreg.coef_.ravel())
_ = plt.xticks(x + 0.5, feature_names, rotation=30)

In this case, survival is slightly positively linked with Fare (the higher the fare, the higher the likelihood that the model predicts survival), while passengers from first class and younger passengers are predicted to survive more often than older people from the 3rd class.

First-class cabins were closer to the lifeboats, and children and women reportedly had priority. Our model seems to capture that historical pattern. We will see later if the sex of the passenger can be used as an informative predictor to increase the predictive accuracy of the model.

Alternative evaluation metrics

Logistic Regression is a probabilistic model: instead of just predicting a binary outcome (survived or not) given the input features, it can also estimate the posterior probability of the outcome given the input features using the predict_proba method:


In [ ]:
target_predicted_proba = logreg.predict_proba(features_test)
target_predicted_proba[:5]

By default the decision threshold is 0.5: if we vary the decision threshold from 0 to 1 we can generate a family of binary classifier models that address all the possible trade-offs between false positive and false negative prediction errors.
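
For instance, here is a quick sketch that derives hard predictions from the estimated probabilities with a non-default threshold (0.7 is an arbitrary illustrative value):


In [ ]:
# Sketch: threshold the survival probabilities manually instead of relying
# on the default 0.5 threshold used by the predict method.
threshold = 0.7  # arbitrary illustrative value
target_predicted_strict = (target_predicted_proba[:, 1] > threshold).astype(int)

# A stricter threshold predicts survival for fewer passengers:
print(target_predicted.sum(), target_predicted_strict.sum())
print(accuracy_score(target_test, target_predicted_strict))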

We can summarize the performance of a binary classifier for all the possible thresholds by plotting the ROC curve and quantifying the Area under the ROC curve:


In [ ]:
from sklearn.metrics import roc_curve
from sklearn.metrics import auc

def plot_roc_curve(target_test, target_predicted_proba):
    fpr, tpr, thresholds = roc_curve(target_test, target_predicted_proba[:, 1])
    
    roc_auc = auc(fpr, tpr)
    # Plot ROC curve
    plt.plot(fpr, tpr, label='ROC curve (area = %0.3f)' % roc_auc)
    plt.plot([0, 1], [0, 1], 'k--')  # random predictions curve
    plt.xlim([0.0, 1.0])
    plt.ylim([0.0, 1.0])
    plt.xlabel('False Positive Rate or (1 - Specificity)')
    plt.ylabel('True Positive Rate or (Sensitivity)')
    plt.title('Receiver Operating Characteristic')
    plt.legend(loc="lower right")

In [ ]:
plot_roc_curve(target_test, target_predicted_proba)

Here the area under the ROC curve is 0.756, which is very similar to the accuracy (0.732). However the ROC-AUC score of a random model is expected to be 0.5 on average, while the accuracy score of a random model depends on the class imbalance of the data. ROC-AUC can thus be seen as a way to calibrate the predictive accuracy of a model against class imbalance.
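
To make this concrete, here is a small sketch (assuming roc_auc_score is available in sklearn.metrics) that evaluates uninformative predictions: the constant "always dead" model gets about 62% accuracy thanks to the class imbalance, while random scores get a ROC-AUC close to 0.5 on average:


In [ ]:
# Sketch: contrast accuracy and ROC-AUC on uninformative predictions.
from sklearn.metrics import roc_auc_score

# Constant prediction: accuracy reflects the class imbalance (~0.62):
print(accuracy_score(target_test, np.zeros_like(target_test)))

# Random scores: ROC-AUC fluctuates around 0.5 (re-run the cell to see it):
print(roc_auc_score(target_test, np.random.uniform(size=len(target_test))))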

It is possible to see the details of the false positive and false negative errors by computing the confusion matrix:


In [ ]:
from sklearn.metrics import confusion_matrix

print(confusion_matrix(target_test, target_predicted))

Another way to quantify the quality of a binary classifier on imbalanced data is to compute the precision, recall and f1-score of a model (at the default fixed decision threshold of 0.5).


In [ ]:
from sklearn.metrics import classification_report

print(classification_report(target_test, target_predicted,
                            target_names=['not survived', 'survived']))

Cross-validation

We previously decided to randomly split the data to evaluate the model on 20% of held-out data. However the randomness of the split might have a significant impact on the estimated accuracy:


In [ ]:
features_train, features_test, target_train, target_test = train_test_split(
    features_array, target, test_size=0.20, random_state=0)

logreg.fit(features_train, target_train).score(features_test, target_test)

In [ ]:
features_train, features_test, target_train, target_test = train_test_split(
    features_array, target, test_size=0.20, random_state=1)

logreg.fit(features_train, target_train).score(features_test, target_test)

In [ ]:
features_train, features_test, target_train, target_test = train_test_split(
    features_array, target, test_size=0.20, random_state=2)

logreg.fit(features_train, target_train).score(features_test, target_test)

So instead of using a single train / test split, we can use a group of them and compute the min, max and mean scores as an estimate of the real test score while not underestimating its variability:


In [ ]:
from sklearn.cross_validation import cross_val_score

scores = cross_val_score(logreg, features_array, target, cv=5)
scores

In [ ]:
scores.min(), scores.mean(), scores.max()

cross_val_score reports accuracy by default but it can also be used to report other performance metrics such as ROC-AUC or f1-score:


In [ ]:
scores = cross_val_score(logreg, features_array, target, cv=5,
                         scoring='roc_auc')
scores.min(), scores.mean(), scores.max()

Exercise:

  • Compute cross-validated scores for other classification metrics ('precision', 'recall', 'f1', 'accuracy'...).

  • Change the number of cross-validation folds between 3 and 10: what is the impact on the mean score? on the processing time?

Hints:

The list of classification metrics is available in the online documentation:

http://scikit-learn.org/stable/modules/model_evaluation.html#common-cases-predefined-values

You can use the %%time cell magic on the first line of an IPython cell to measure the time of the execution of the cell.


In [ ]:
scores = cross_val_score(logreg, features_array, target, cv=5,
                         scoring='f1')
scores.min(), scores.mean(), scores.max()

In [ ]:
%%time

scores = cross_val_score(logreg, features_array, target, cv=3,
                         scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

In [ ]:
%%time

scores = cross_val_score(logreg, features_array, target, cv=5,
                         scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

In [ ]:
%%time

scores = cross_val_score(logreg, features_array, target, cv=10,
                         scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

Increasing the number of folds leads to longer training and evaluation times, as cv=10 means that ten different models will be trained instead of three for cv=3.

Furthermore, larger values for cv yield larger training sets (cv=10 means that each split will have 90% of the data for training and 10% held out for evaluation). Models tend to perform better when they have more data. However here the mean score stays almost constant, possibly because our model is too simplistic to benefit from larger training sets.
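
The following sketch makes those train / validation sizes explicit (plain KFold is used for simplicity, even though cross_val_score uses stratified folds for classifiers):


In [ ]:
# Sketch: show how the train / validation sizes change with the number of folds.
from sklearn.cross_validation import KFold

n_samples = len(target)
for n_folds in (3, 10):
    train_idx, test_idx = next(iter(KFold(n_samples, n_folds=n_folds)))
    print(n_folds, len(train_idx), len(test_idx))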

More feature engineering and richer models

Let us now try to build richer models by including more features as potential predictors for our model.

Categorical variables such as data.Embarked or data.Sex can be converted into boolean indicator features, also known as dummy variables or one-hot encoded features:


In [ ]:
pd.get_dummies(data.Sex, prefix='Sex').head(5)

In [ ]:
pd.get_dummies(data.Embarked, prefix='Embarked').head(5)

We can combine those new numerical features with the previous features using pandas.concat along axis=1:


In [ ]:
rich_features = pd.concat([data.get(['Fare', 'Pclass', 'Age']),
                           pd.get_dummies(data.Sex, prefix='Sex'),
                           pd.get_dummies(data.Embarked, prefix='Embarked')],
                          axis=1)
rich_features.head(5)

By construction the new Sex_male feature is redundant with Sex_female. Let us drop it:


In [ ]:
rich_features_no_male = rich_features.drop('Sex_male', 1)
rich_features_no_male.head(5)

Let us not forget to impute the median age for passengers without age information:


In [ ]:
rich_features_final = rich_features_no_male.fillna(rich_features_no_male.dropna().median())
rich_features_final.head(5)

We can finally cross-validate a logistic regression model on this new data and observe that the mean score has significantly increased:


In [ ]:
%%time

from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import cross_val_score

logreg = LogisticRegression(C=1)
scores = cross_val_score(logreg, rich_features_final, target, cv=5, scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

Exercise:

  • change the value of the parameter C. Does it have an impact on the score?

  • fit a new instance of the logistic regression model on the full dataset.

  • plot the weights for the features of this newly fitted logistic regression model.


In [ ]:
logreg_new = LogisticRegression(C=1).fit(rich_features_final, target)

feature_names = rich_features_final.columns.values
x = np.arange(len(feature_names))
plt.bar(x, logreg_new.coef_.ravel())
_ = plt.xticks(x + 0.5, feature_names, rotation=30)

The model thinks that being a rich young girl who embarked in Cherbourg helped one's chances of surviving the Titanic.

Training Non-linear models: ensembles of randomized trees

sklearn also implements non-linear models that are known to perform very well for data science projects where the datasets do not have too many features (e.g. fewer than 5000).

In particular let us have a look at Random Forests and Gradient Boosted Trees:


In [ ]:
%%time

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(rf, rich_features_final, target, cv=5, n_jobs=4,
                         scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

In [ ]:
%%time

from sklearn.ensemble import GradientBoostingClassifier

gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                subsample=.8, max_features=.5)
scores = cross_val_score(gb, rich_features_final, target, cv=5, n_jobs=4,
                         scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

Both models seem to do slightly better than the logistic regression model on this data.

Exercise:

  • Change the value of learning_rate and other GradientBoostingClassifier parameters: can you get a better mean score?

  • Would treating the Pclass variable as categorical improve the model's performance?

  • Find out which predictor variables (features) are the most informative for those models.

Hints:

Fitted ensembles of trees have a feature_importances_ attribute that can be used similarly to the coef_ attribute of linear models.
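
As a quick sketch of that hint, we can refit the random forest defined above on the full feature set and plot its feature importances just like we plotted the linear model weights:


In [ ]:
# Sketch: fit the random forest on all the data and inspect which features
# it relies on the most.
rf.fit(rich_features_final, target)

feature_names = rich_features_final.columns.values
x = np.arange(len(feature_names))
plt.bar(x, rf.feature_importances_)
_ = plt.xticks(x + 0.5, feature_names, rotation=30)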

Let us now rebuild a new version of the data by treating the passenger class (Pclass) as a categorical variable:


In [ ]:
features = pd.concat([data.get(['Fare', 'Age']),
                      pd.get_dummies(data.Sex, prefix='Sex'),
                      pd.get_dummies(data.Pclass, prefix='Pclass'),
                      pd.get_dummies(data.Embarked, prefix='Embarked')],
                     axis=1)
features = features.drop('Sex_male', 1)
features = features.fillna(features.dropna().median())
features.head(5)

In [ ]:
logreg = LogisticRegression(C=1)
scores = cross_val_score(logreg, features, target, cv=5, scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

In [ ]:
gb = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                subsample=.8, max_features=.5)
scores = cross_val_score(gb, features, target, cv=5, n_jobs=4,
                         scoring='accuracy')
print(scores.min(), scores.mean(), scores.max())

This does not seem to impact the model's performance in a significant manner.

Let us finally plot the variable importances for our Gradient Boosted Tree model.


In [ ]:
gb_new = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                    subsample=.8, max_features=.5)
gb_new.fit(features, target)
feature_names = features.columns.values
x = np.arange(len(feature_names))
plt.bar(x, gb_new.feature_importances_)
_ = plt.xticks(x + 0.5, feature_names, rotation=30)

Automated parameter tuning

Instead of changing the value of the learning rate manually and re-running the cross-validation, we can find the best values for the parameters automatically (assuming we are ready to wait):


In [ ]:
%%time

from sklearn.grid_search import GridSearchCV

gb = GradientBoostingClassifier(n_estimators=100, subsample=.8)

params = {
    'learning_rate': [0.05, 0.1, 0.5],
    'max_features': [0.5, 1],
    'max_depth': [3, 4, 5],
}
gs = GridSearchCV(gb, params, cv=5, scoring='roc_auc', n_jobs=4)
gs.fit(features, target)

Let us sort the models by mean validation score:


In [ ]:
sorted(gs.grid_scores_, key=lambda x: x.mean_validation_score)

In [ ]:
gs.best_score_

In [ ]:
gs.best_params_

We should note that the mean scores are very close to one another and almost always within one standard deviation of one another. This means that all those parameter settings are quite reasonable. The only parameter of importance seems to be learning_rate: 0.5 seems to be a bit too high.

Avoiding data snooping with pipelines

When doing imputation in pandas prior to computing the train / test split, we use data from the test set to improve the accuracy of the median value that we impute on the training set. This is actually cheating. To avoid this we should compute the median of the features on the training fold and use that median value to do the imputation on both the training and validation folds for a given CV split.

To do this we can prepare the features as previously but without the imputation: we just replace missing values by the -1 marker value:


In [ ]:
features = pd.concat([data.get(['Fare', 'Age']),
                      pd.get_dummies(data.Sex, prefix='Sex'),
                      pd.get_dummies(data.Pclass, prefix='Pclass'),
                      pd.get_dummies(data.Embarked, prefix='Embarked')],
                     axis=1)
features = features.drop('Sex_male', 1)

# Because of the following bug we cannot use NaN as the missing
# value marker, use a negative value as marker instead:
# https://github.com/scikit-learn/scikit-learn/issues/3044
features = features.fillna(-1)
features.head(5)

We can now use the Imputer transformer of scikit-learn to find the median value on the training set and apply it to the missing values of both the training set and the test set.


In [ ]:
from sklearn.cross_validation import train_test_split

X_train, X_test = train_test_split(features.values)

In [ ]:
from sklearn.preprocessing import Imputer

imputer = Imputer(strategy='median', missing_values=-1)

imputer.fit(X_train)

The median values computed on the training set are stored in the statistics_ attribute; index 1 corresponds to the Age column.


In [ ]:
imputer.statistics_[1]

Imputation can now happen by calling the transform method:


In [ ]:
X_train_imputed = imputer.transform(X_train)
X_test_imputed = imputer.transform(X_test)

In [ ]:
np.any(X_train == -1)

In [ ]:
np.any(X_train_imputed == -1)

In [ ]:
np.any(X_test == -1)

In [ ]:
np.any(X_test_imputed == -1)

We can now use a pipeline that wraps an imputer transformer and the classifier itself:


In [ ]:
from sklearn.pipeline import Pipeline

imputer = Imputer(strategy='median', missing_values=-1)

classifier = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                        subsample=.8, max_features=.5)

pipeline = Pipeline([
    ('imp', imputer),
    ('clf', classifier),
])

scores = cross_val_score(pipeline, features.values, target, cv=5, n_jobs=4,
                         scoring='accuracy', )
print(scores.min(), scores.mean(), scores.max())

The mean cross-validation score is slightly lower than when we imputed on the whole dataset as we did earlier, although not by much. This means that in this case the data snooping was not really helping the model cheat by much.

Let us re-run the grid search, this time on the pipeline. Note that thanks to the pipeline structure we can optimize the interaction of the imputation method with the parameters of the downstream classifier without cheating:


In [ ]:
%%time

params = {
    'imp__strategy': ['mean', 'median'],
    'clf__max_features': [0.5, 1],
    'clf__max_depth': [3, 4, 5],
}
gs = GridSearchCV(pipeline, params, cv=5, scoring='roc_auc', n_jobs=4)
gs.fit(features, target)

In [ ]:
sorted(gs.grid_scores_, key=lambda x: x.mean_validation_score)

In [ ]:
gs.best_score_

In [ ]:
gs.best_params_

From this search we can conclude that the 'mean' strategy is generally a slightly better imputation strategy when training a GBRT model on this data.

Further integrating sklearn and pandas

The sklearn-pandas project (https://github.com/paulgb/sklearn-pandas) provides helper tools for better sklearn / pandas integration, making it possible to embed the feature construction from the raw dataframe directly inside a pipeline.
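
As a hedged sketch of what this can look like (assuming the DataFrameMapper class described in the sklearn-pandas README; the exact API may vary across versions), the feature extraction can be declared column by column and then used as the first step of a pipeline:


In [ ]:
# Sketch only: requires the external sklearn-pandas package, assumed to
# provide a DataFrameMapper transformer.
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelBinarizer, StandardScaler

mapper = DataFrameMapper([
    ('Sex', LabelBinarizer()),      # string labels -> 0/1 indicator column
    (['Fare'], StandardScaler()),   # numerical column, standardized
])

# The mapper behaves like any other transformer and could be used as the
# first step of a Pipeline fed with the raw data frame:
mapper.fit_transform(data.copy())[:5]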

Credits

Thanks to:

  • Kaggle for setting up the Titanic challenge.

  • This blog post by Philippe Adjiman for inspiration:

http://www.philippeadjiman.com/blog/2013/09/12/a-data-science-exploration-from-the-titanic-in-r/


In [ ]: